ANFIS-based prediction of power generation for combined cycle power plant

Pa, Mary, Kazemi, Amin

arXiv.org Artificial Intelligence

This paper presents the application of an adaptive neuro-fuzzy inference system (ANFIS) to predict the generated electrical power in a combined cycle power plant. The ANFIS architecture is implemented in MATLAB through a code that utilizes a hybrid algorithm combining gradient descent and the least-squares estimator to train the network. The model is verified by applying it to approximate a nonlinear equation with three variables and the Mackey-Glass time-series equation, and by comparing it against the ANFIS toolbox in MATLAB. Once its validity is confirmed, ANFIS is implemented to forecast the electrical power generated by the power plant. The ANFIS has three inputs: temperature, pressure, and relative humidity. Each input is fuzzified by three Gaussian membership functions. The first-order Sugeno-type defuzzification approach is utilized to evaluate a crisp output. The proposed ANFIS is capable of predicting power generation with high accuracy while running much faster than the toolbox, which makes it a promising tool for energy-generation applications.
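The paper's implementation is in MATLAB and is not reproduced here; as a minimal sketch of the mechanics the abstract describes (Gaussian membership functions and a first-order Sugeno weighted average), the following Python fragment uses made-up rule parameters purely for illustration:

```python
import math

def gaussian_mf(x, c, sigma):
    """Gaussian membership degree of x for a fuzzy set centered at c."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def sugeno_output(inputs, rules):
    """First-order Sugeno inference: each rule pairs per-input Gaussian
    MF parameters with a linear consequent; the crisp output is the
    firing-strength-weighted average of the rule consequents."""
    weights, outputs = [], []
    for mfs, (coeffs, bias) in rules:
        # Firing strength: product of membership degrees over all inputs.
        w = 1.0
        for x, (c, sigma) in zip(inputs, mfs):
            w *= gaussian_mf(x, c, sigma)
        weights.append(w)
        # First-order consequent: a linear function of the inputs.
        outputs.append(sum(a * x for a, x in zip(coeffs, inputs)) + bias)
    return sum(w * y for w, y in zip(weights, outputs)) / sum(weights)

# Two illustrative rules over (temperature, pressure, humidity);
# all centers, widths, and coefficients are invented for the sketch.
rules = [
    (((15.0, 5.0), (1010.0, 8.0), (60.0, 15.0)), ((-2.0, 0.1, -0.05), 480.0)),
    (((30.0, 5.0), (1005.0, 8.0), (80.0, 15.0)), ((-2.5, 0.1, -0.04), 470.0)),
]
power = sugeno_output((20.0, 1012.0, 70.0), rules)
```

In a real ANFIS these MF parameters and consequent coefficients are exactly what the hybrid gradient-descent / least-squares training adjusts.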


Bea Stollnitz - Creating batch endpoints in Azure ML

#artificialintelligence

Suppose you've trained a machine learning model to accomplish some task, and you'd now like to provide that model's inference capabilities as a service. Maybe you're writing an application of your own that will rely on this service, or perhaps you want to make the service available to others. This is the purpose of endpoints -- they provide a simple web-based API for feeding data to your model and getting back inference results. Azure ML currently supports three types of endpoints: batch endpoints, Kubernetes online endpoints, and managed online endpoints. I'm going to focus on batch endpoints in this post, but let me start by explaining how the three types differ. Batch endpoints are designed to handle large requests, working asynchronously and generating results that are held in blob storage.
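Setting the Azure ML SDK aside, the asynchronous batch pattern the post describes can be sketched generically: a caller submits a large request, gets a handle back immediately, and results land in a store to be fetched later. The "blob store" here is just a dict standing in for real blob storage:

```python
import concurrent.futures

def predict(record):
    """Toy stand-in for model inference on a single record."""
    return record * 2

# Simulated blob storage: job id -> list of results.
blob_store = {}

def submit_batch_job(job_id, records, pool):
    """Score a large batch asynchronously and persist the results.
    Returns a future immediately, mirroring how a batch endpoint
    accepts a request and lets the caller poll for completion."""
    def run():
        blob_store[job_id] = [predict(r) for r in records]
        return job_id
    return pool.submit(run)

with concurrent.futures.ThreadPoolExecutor() as pool:
    future = submit_batch_job("job-001", range(1000), pool)
    future.result()  # a real client would poll job status instead
```

The key contrast with an online endpoint is visible in the shape of the API: nothing is returned inline; the caller retrieves results from storage once the job finishes.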


Top 11 Tools For Distributed Machine Learning

#artificialintelligence

Centralised systems employ a strictly hierarchical approach, whereas a distributed system consists of a network of independent nodes in which no specific roles are assigned to particular nodes. A centralised solution is not the right choice when data is inherently distributed or too big to store on a single machine. For instance, think about astronomical data that is too large to move and centralise. In a recent work, researchers at Delft University of Technology in the Netherlands wrote in detail about the current state-of-the-art distributed ML models and how they affect computation latency and other attributes.
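The core idea behind most of these tools, sketched here generically (this is not taken from the Delft survey), is that each node computes a local result on its own shard and only small partial results are combined, so the full dataset never has to be centralised:

```python
# Each "node" holds a local shard of the data and computes a local
# statistic; a reduction step combines the partials into a global
# answer without any single node seeing the full dataset.

def local_mean(shard):
    """Per-node computation: local mean plus the shard size."""
    return sum(shard) / len(shard), len(shard)

def combine(partials):
    """Size-weighted combination of per-node means -> global mean."""
    total = sum(n for _, n in partials)
    return sum(m * n for m, n in partials) / total

shards = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]  # data already distributed
global_mean = combine([local_mean(s) for s in shards])
```

The same map-then-reduce shape underlies data-parallel training, where nodes exchange gradients or model updates instead of means.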


AI in the cloud: AWS makes machine learning more accessible for developers - SiliconANGLE

#artificialintelligence

Amazon Web Services Inc.'s re:Invent conference is still nearly a week away, but you wouldn't know it from the sheer number of new products and updates it has announced in recent days -- especially in artificial intelligence, likely to be a key focus of the conference. Following last week's storage announcements and its "internet of things" updates on Monday, AWS today introduced new features aimed at making it easier for developers to add AI predictions to their applications and services. The central idea is to put Amazon's machine learning technology in reach of more developers, AWS principal Matt Asay said in a blog post. Machine learning predictions will soon be able to run on unstructured or relational data in Amazon S3, its main storage service, and Amazon Aurora, which is a cloud-hosted MySQL- and PostgreSQL-compatible relational database service. What that means is that customers will be able to train machine learning models in SQL using Aurora or AWS Athena, which is an interactive query service for analyzing data in S3.


Amazon simplifies incorporating AI predictions into apps and services

#artificialintelligence

Amazon's re:Invent 2019 conference is nearly two weeks out, but try telling that to Amazon Web Services (AWS) -- it's unveiling new products left and right. Following on the heels of Alexa on AWS Core and new language support in Amazon Translate and Transcribe, AWS today detailed features designed to make adding AI predictions to apps and services easier than before. Amazon says that machine learning predictions will soon run on unstructured or relational data in Amazon S3 or Aurora, AWS' cloud-hosted MySQL- and PostgreSQL-compatible relational database service. Customers will be able to train models in Amazon's SageMaker platform and run predictions against those models with SQL using Aurora or Athena, Amazon's interactive query service for analyzing data in Amazon S3. The benefits extend to QuickSight, the AWS component that lets customers create and publish dashboards that spotlight AI insights.


Rule As a Code -- SureLog Correlation Engine and Beyond

#artificialintelligence

SureLog SIEM is a security platform that differs from many SIEM products. The main difference is its correlation engine, in which you can develop your own logic with a high-level domain-specific language. There is no restriction on the logic, because you can also develop it in Java, including machine learning, statistical methods, and artificial intelligence. SureLog is also ready for the following ML libraries. SureLog's correlation engine includes a feature called Rule as a Code, in which a rule is expressed directly as code.


New in Stream Analytics: Machine Learning, online scaling, custom code, and more

#artificialintelligence

Azure Stream Analytics is a fully managed Platform as a Service (PaaS) that supports thousands of mission-critical customer applications powered by real-time insights. Out-of-the-box integration with numerous other Azure services enables developers and data engineers to build high-performance, hot-path data pipelines within minutes. The key tenets of Stream Analytics include ease of use, developer productivity, and enterprise readiness. Today, we're announcing several new features that further enhance these key tenets. Let's take a closer look at these features. Rollout of these preview features begins November 4, 2019.

